We present a novel approach to improving the performance of learning-based speech dereverberation using accurate synthetic datasets. Our approach is designed to recover the reverberation-free signal from a reverberant speech signal. We show that accurately simulating the low-frequency components of Room Impulse Responses (RIRs) is important for achieving good dereverberation. We use the GWA dataset, which consists of synthetic RIRs generated in a hybrid fashion: an accurate wave-based solver simulates the lower frequencies, and geometric ray tracing methods simulate the higher frequencies. We demonstrate that speech dereverberation models trained on hybrid synthetic RIRs outperform models trained on RIRs generated by prior geometric ray tracing methods on four real-world RIR datasets.
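Below is a minimal sketch of how (reverberant, clean) training pairs for such a dereverberation model are commonly constructed, by convolving anechoic speech with an RIR; the toy signals and the peak normalization are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: build a (reverberant, clean) training pair by convolving anechoic
# speech with a synthetic RIR. Signals and normalization are illustrative.
import numpy as np
from scipy.signal import fftconvolve

def make_training_pair(clean, rir):
    # Convolve and truncate back to the clean signal's length.
    reverberant = fftconvolve(clean, rir, mode="full")[: len(clean)]
    # Peak-normalize so the model sees a consistent dynamic range (assumption).
    peak = np.max(np.abs(reverberant)) + 1e-8
    return reverberant / peak, clean / peak

clean = np.random.randn(16000)                                    # stand-in for 1 s of 16 kHz speech
rir = np.random.randn(4000) * np.exp(-np.linspace(0, 8, 4000))    # toy decaying RIR
noisy, target = make_training_pair(clean, rir)
```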
Mainstream object detectors usually consist of two sub-tasks, classification and regression, implemented by two parallel heads. This classic design paradigm inevitably leads to a spatial misalignment between the classification score and the localization quality (IoU). This paper therefore alleviates the misalignment from a knowledge distillation perspective. First, we observe that a large teacher produces a higher proportion of harmonious predictions than a lightweight student. Based on this intriguing observation, a novel Harmony Score (HS) is devised to estimate the consistency of classification and regression quality. HS models the relationship between the two sub-tasks and serves as prior knowledge to promote harmonious predictions in the student. Second, this spatial misalignment leads to biased region selection when distilling features. To alleviate this problem, a novel Task Feature Distillation (TFD) is proposed, which flexibly balances the contributions of the classification and regression tasks. Ultimately, HD and TFD constitute the proposed method, named Task-Balanced Distillation (TBD). Extensive experiments demonstrate the considerable potential and generalization of the proposed method. Specifically, when equipped with TBD, RetinaNet with ResNet-50 achieves 41.0 mAP on the COCO benchmark, outperforming the recent FGD and FRS.
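The abstract does not give the exact form of the Harmony Score, so the geometric-mean combination below is purely a hypothetical illustration of a score that is high only when classification confidence and localization quality agree.

```python
# Hypothetical sketch of a harmony-style score: rewards agreement between a
# detector's classification confidence and its localization quality (IoU).
# The geometric-mean form is an assumption; the paper defines its own HS.
import torch

def harmony_score(cls_score, iou, alpha=0.5):
    # High only when both quantities are high; low when they diverge.
    return cls_score.clamp(min=1e-6) ** alpha * iou.clamp(min=1e-6) ** (1 - alpha)

# A harmonious prediction (0.9, 0.85) scores far higher than a misaligned one (0.9, 0.2).
hs = harmony_score(torch.tensor([0.9, 0.9]), torch.tensor([0.85, 0.2]))
```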
Since documents convey rich human knowledge and are ubiquitous in enterprises, building document-grounded dialogue systems has attracted growing interest, and how to understand and retrieve information from documents is a challenging research problem therein. Prior work ignores the visual properties of documents and treats them as plain text, resulting in incomplete modalities. In this paper, we propose LIE, a layout-aware document-level information extraction dataset, to facilitate the study of extracting structural and semantic knowledge from visually rich documents (VRDs) so as to generate accurate responses in dialogue systems. LIE contains 62k annotations over three extraction tasks from 4,061 pages of product and official documents, becoming, to the best of our knowledge, the largest VRD-based information extraction dataset. We also develop benchmark methods that extend token-based language models to take layout features into account, as humans do. Empirical results show that layout is essential for VRD-based extraction, and a system demonstration further verifies that the extracted knowledge can help locate the answers users care about.
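A hedged sketch of what extending a token-based language model with layout features can look like, in the spirit of LayoutLM-style 2D position embeddings; the dimensions, bucket count, and bounding-box encoding are assumptions, not LIE's actual baseline.

```python
# Sketch: add 2D layout embeddings (bucketized bounding-box coordinates) to
# ordinary token embeddings. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class LayoutAwareEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, coord_buckets=1024):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.x_emb = nn.Embedding(coord_buckets, hidden)  # normalized x of the token's bbox
        self.y_emb = nn.Embedding(coord_buckets, hidden)  # normalized y of the token's bbox

    def forward(self, token_ids, bbox):
        # bbox: (batch, seq, 2) integer coordinates bucketized to [0, coord_buckets).
        return self.tok(token_ids) + self.x_emb(bbox[..., 0]) + self.y_emb(bbox[..., 1])

emb = LayoutAwareEmbedding()
token_ids = torch.randint(0, 30522, (2, 16))
bbox = torch.randint(0, 1024, (2, 16, 2))
out = emb(token_ids, bbox)                                # (2, 16, 768)
```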
Self-supervised learning (SSL) has been extensively explored in recent years. In particular, generative SSL has seen renewed success in natural language processing and other AI fields, as exemplified by the wide adoption of BERT and GPT. Nevertheless, contrastive learning, which heavily relies on structural data augmentation and complicated training strategies, has remained the dominant approach to graph SSL, while the progress of generative SSL on graphs, especially graph autoencoders (GAEs), has so far not reached the potential promised in other fields. In this paper, we identify and examine the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric. We present GraphMAE, a masked graph autoencoder that mitigates these issues for generative self-supervised graph pretraining. Instead of reconstructing graph structure, we propose to focus on feature reconstruction with both a masking strategy and a scaled cosine error, which benefit GraphMAE's robust training. We conduct extensive experiments on 21 public datasets across three different graph learning tasks. The results show that GraphMAE, a simple graph autoencoder with careful designs, can consistently outperform both contrastive and generative state-of-the-art baselines. This study provides an understanding of graph autoencoders and demonstrates the potential of generative self-supervised pretraining on graphs.
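The scaled cosine error on masked node features can be sketched as follows; the exponent gamma and the masking rate are hyperparameters, and the random tensors below stand in for input features and decoder outputs.

```python
# Sketch of a scaled cosine error over masked node features:
# loss = mean over masked nodes of (1 - cosine_similarity)^gamma.
import torch
import torch.nn.functional as F

def scaled_cosine_error(x, x_hat, gamma=2.0):
    cos = F.cosine_similarity(x, x_hat, dim=-1)    # per-node similarity in [-1, 1]
    return ((1.0 - cos) ** gamma).mean()           # gamma > 1 down-weights easy nodes

num_nodes, dim = 100, 64
x = torch.randn(num_nodes, dim)                    # original node features
mask = torch.rand(num_nodes) < 0.5                 # illustrative 50% masking rate
x_hat = torch.randn(num_nodes, dim)                # stand-in for decoder reconstructions
loss = scaled_cosine_error(x[mask], x_hat[mask])   # computed on masked nodes only
```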
We propose MESH2IR, a mesh-based neural network to generate acoustic impulse responses (IRs) for indoor 3D scenes represented as meshes. IRs are used to create high-quality sound experiences in interactive applications and audio processing. Our method can handle input triangular meshes with arbitrary topologies (2K-3M triangles). We present a novel training technique using energy decay relief to ease the training of MESH2IR and highlight its benefits. We also show that training MESH2IR on IRs preprocessed with our proposed technique significantly improves the accuracy of IR generation. We reduce the non-linearity in the mesh space by transforming 3D scene meshes into a latent space using a graph convolutional network. Our MESH2IR is more than 200x faster than geometric acoustics algorithms on a CPU and can generate more than 10,000 IRs per second for a given indoor 3D scene on an NVIDIA GeForce RTX 2080 Ti GPU. Acoustic metrics are used to characterize acoustic environments, and we show that the acoustic metrics of the IRs predicted by MESH2IR match the ground truth with less than 10% error. We also highlight the benefits of MESH2IR for audio and speech processing applications such as speech dereverberation and speech separation. To the best of our knowledge, ours is the first neural-network-based approach to predict IRs from a given 3D scene mesh in real time.
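The abstract mentions an energy-decay-based training technique; a standard quantity such a technique could build on is the Schroeder backward-integrated energy decay curve, sketched here. Its exact role inside MESH2IR's loss is an assumption.

```python
# Sketch: Schroeder energy decay curve (EDC) of an impulse response, in dB.
# Backward integration of squared amplitude, normalized to 0 dB at t = 0.
import numpy as np

def energy_decay_curve_db(ir):
    energy = ir.astype(np.float64) ** 2
    edc = np.cumsum(energy[::-1])[::-1]            # backward (Schroeder) integration
    edc = edc / (edc[0] + 1e-12)                   # normalize so the curve starts at 0 dB
    return 10.0 * np.log10(edc + 1e-12)

ir = np.random.randn(16000) * np.exp(-np.linspace(0, 10, 16000))  # toy decaying IR
edc_db = energy_decay_curve_db(ir)                 # monotonically decreasing decay curve
```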
Graph neural networks (GNNs) have achieved remarkable success in semi-supervised learning scenarios. The message-passing mechanism in graph neural networks helps unlabeled nodes gather supervision signals from their labeled neighbors. In this work, we investigate how consistency regularization, one of the most widely adopted semi-supervised learning methods, can help improve the performance of graph neural networks. We revisit two consistency regularization methods for graph neural networks: simple consistency regularization (SCR) and mean-teacher consistency regularization (MCR). We combine these consistency regularization methods with two state-of-the-art GNNs and conduct experiments on the ogbn-products dataset. With consistency regularization, the performance of state-of-the-art GNNs on ogbn-products improves by 0.3%, both with and without external data.
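A minimal sketch of simple consistency regularization for a GNN: two stochastic forward passes with dropout active should agree on unlabeled nodes. Treating one pass as a detached target and using an MSE penalty are assumptions; the paper may use a different distance.

```python
# Sketch: consistency loss between two dropout-perturbed predictions on
# unlabeled nodes. The toy MLP stands in for a GNN (a real one takes edges too).
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model, x, unlabeled_idx):
    model.train()                                                   # keep dropout stochastic
    p_target = F.softmax(model(x)[unlabeled_idx], dim=-1).detach()  # first view as target
    p_online = F.softmax(model(x)[unlabeled_idx], dim=-1)           # second, independent view
    return F.mse_loss(p_online, p_target)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 47))
x = torch.randn(200, 32)                            # node features
unlabeled_idx = torch.arange(100, 200)              # indices of unlabeled nodes
loss = consistency_loss(model, x, unlabeled_idx)    # added to the supervised loss
```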
Previous cycle-consistency correspondence learning methods usually leverage image patches for training. In this paper, we present a fully convolutional method, which is simpler and more coherent with the inference process. While directly applying fully convolutional training results in model collapse, we study the underlying reasons behind this collapse phenomenon, showing that the absolute positions of pixels provide a shortcut to easily accomplish cycle consistency, which hinders the learning of meaningful visual representations. To break this absolute-position shortcut, we propose to apply different crops to the forward and backward frames, and adopt feature warping to establish correspondence between the two crops of the same frame. The former technique enforces corresponding pixels in forward and backward tracking to have different absolute positions, and the latter effectively blocks the shortcut between the forward and backward tracks. On three label propagation benchmarks for pose tracking, face landmark tracking, and video object segmentation, our method largely improves the results of the vanilla fully convolutional cycle-consistency method, achieving very competitive performance compared with self-supervised state-of-the-art approaches. Our trained models and code are available at https://github.com/steve-tod/stfc3.
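The crop asymmetry can be sketched as follows: the forward and backward passes see independently sampled crops of the same frame, so absolute pixel position no longer provides a cycle-consistency shortcut. The crop size is arbitrary here, and the feature-warping step between the two crops is omitted.

```python
# Sketch: sample two independent random crops of one frame, one for the forward
# pass and one for the backward pass, returning offsets for later alignment.
import torch

def two_random_crops(frame, size=256):
    # frame: (C, H, W). Each crop gets its own random offset.
    _, H, W = frame.shape
    def crop():
        y = torch.randint(0, H - size + 1, (1,)).item()
        x = torch.randint(0, W - size + 1, (1,)).item()
        return frame[:, y:y + size, x:x + size], (y, x)
    (crop_fwd, off_fwd), (crop_bwd, off_bwd) = crop(), crop()
    return crop_fwd, crop_bwd, off_fwd, off_bwd

frame = torch.randn(3, 480, 640)
crop_fwd, crop_bwd, off_fwd, off_bwd = two_random_crops(frame)
```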
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Recognizing MEs (MER) automatically is therefore becoming increasingly crucial in the field of affective computing, and provides essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To address ME data hunger, we construct DFME (Dynamic Facial Micro-expressions), a dynamic spontaneous ME dataset with the largest ME data scale to date, comprising 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that DFME can facilitate research on automatic MER and provide a new benchmark for it. DFME will be published via https://mea-lab-421.github.io.
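One standard remedy for the class-imbalance problem the authors study is inverse-frequency sampling, sketched below with PyTorch's WeightedRandomSampler; whether DFME's baselines use exactly this scheme is not stated in the abstract, and the label tensor is a stand-in.

```python
# Sketch: inverse-class-frequency sampling so that rare expression classes are
# drawn as often as common ones during training.
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.randint(0, 7, (7526,))               # stand-in for DFME's per-video labels
class_counts = torch.bincount(labels).float()
weights = 1.0 / class_counts[labels]                # rarer class => higher per-sample weight
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
# Pass `sampler=sampler` to a DataLoader instead of `shuffle=True`.
```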
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
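A hedged sketch of a fine-tuned Transformer baseline for MAUD-style clause questions, framed here as sequence classification over answer choices; the checkpoint, label count, and input text are illustrative assumptions rather than the paper's exact setup.

```python
# Sketch: score a legal clause against a fixed set of answer choices with a
# fine-tunable Transformer classifier. Checkpoint and num_labels are assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

clause = "Buyer may terminate this Agreement if the Closing has not occurred ..."
enc = tokenizer(clause, truncation=True, max_length=512, return_tensors="pt")
logits = model(**enc).logits                        # one score per answer choice
prediction = logits.argmax(dim=-1)
```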
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting previously learned classes.
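The CLIP-embedding idea can be sketched as follows: each class name is encoded with CLIP's text tower, and the resulting vectors condition the segmentation heads. The prompt template and the downstream conditioning are assumptions based on the abstract's description.

```python
# Sketch: encode organ/tumor class names with CLIP's text encoder to obtain
# class embeddings that a segmentation head can be conditioned on.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

classes = ["liver", "liver tumor", "pancreas", "spleen"]   # subset for illustration
prompts = [f"a computerized tomography of a {c}" for c in classes]  # template is an assumption
tokens = tokenizer(prompts, padding=True, return_tensors="pt")
with torch.no_grad():
    class_embeddings = text_model(**tokens).pooler_output   # (num_classes, 512)
# Each row conditions the prediction head for its class, so new classes can be
# added by encoding new names rather than retraining a fixed label space.
```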